Improved two-stream model for human action recognition

Authors

Abstract


Similar articles

Improved Discriminative Model for View-Invariant Human Action Recognition

Recognizing human actions plays an important role in applications such as video surveillance. The recent past has witnessed increasing research on view-invariant action recognition. Huang et al. proposed a framework based on a discriminative model for human action recognition. This model uses STIP (Space-Time Interest Points) to extract motion features and view invariants. Then a discriminative m...


Two-Stream SR-CNNs for Action Recognition in Videos

Human action is a high-level concept in computer vision research, and understanding it may benefit from different semantics, such as human pose, interacting objects, and scene context. In this paper, we explicitly exploit semantic cues with the aid of existing human/object detectors for action recognition in videos, and thoroughly study their effect on recognition performance for different types...


Two-Stream convolutional nets for action recognition in untrimmed video

We extend the two-stream convolutional net architecture developed by Simonyan et al. for action recognition in untrimmed video clips. The main challenges of this project are first replicating the results of Simonyan et al., and then extending the pipeline to apply it to much longer video clips in which no actions of interest are taking place most of the time. We explore aspects of the performance of th...


Two-Stream Convolutional Networks for Action Recognition in Videos

We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we ...
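The two-stream design described in this abstract processes appearance (still RGB frames) and motion (stacked optical flow) in separate networks and combines their class scores. A minimal sketch of that score-level late fusion, using randomly generated logits as stand-ins for each stream's output (the class count of 101 assumes a UCF-101-style benchmark):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
# Hypothetical logits: the spatial stream scores a still RGB frame,
# the temporal stream scores a stack of optical-flow fields.
spatial_logits = rng.normal(size=(1, 101))
temporal_logits = rng.normal(size=(1, 101))

# Late fusion: average the per-stream softmax scores, then predict
fused = (softmax(spatial_logits) + softmax(temporal_logits)) / 2
pred = int(fused.argmax(axis=-1)[0])
```

Averaging softmax scores is one of the fusion strategies reported for two-stream networks; a linear SVM trained on the stacked stream scores is a common alternative.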



Journal

Journal title: EURASIP Journal on Image and Video Processing

Year: 2020

ISSN: 1687-5281

DOI: 10.1186/s13640-020-00501-x